Comparative Evaluation of Generalized ADALINE using Variable Learning Rate Parameter
Authors
Abstract
The ADAptive LINear Element (ADALINE) neural network uses the Least Mean Square (LMS) learning rule. This paper presents a comprehensive comparison of three variable learning rate (VLR) LMS algorithms for the generalized ADALINE neural network paradigm. These algorithms adjust the weights of the ADALINE network and are tested in three different applications: adaptive prediction, system identification, and noise cancellation. The performance of the algorithms in different scenarios is analyzed with the help of computer simulations. The simulation results show that in the initial stage, the mean square error between the network output and the desired output is smallest for the algorithm with the fastest convergence rate. The major advantage is therefore faster convergence of the synaptic weights towards the optimum solution, together with better tracking performance across the different applications.
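As an illustration of how a VLR parameter changes the basic LMS update, the Python sketch below applies one well-known variable step-size rule (Kwong and Johnston's mu(n+1) = alpha*mu(n) + gamma*e(n)^2, clipped to [mu_min, mu_max]) to an ADALINE in a system identification setting. The abstract does not name the three algorithms the paper compares, so this particular scheme, the plant coefficients, and all parameter values are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of an ADALINE trained with a variable learning rate (VLR)
# LMS rule, in a system identification setting. The VLR scheme used here
# (Kwong-Johnston) and all constants are illustrative assumptions; the
# paper's three VLR algorithms are not named in the abstract.

rng = np.random.default_rng(0)

plant = np.array([0.8, -0.4, 0.2, 0.1])          # "unknown" FIR plant (assumed)
n_taps, n_samples = len(plant), 2000

w = np.zeros(n_taps)                              # ADALINE weights
mu, mu_min, mu_max = 0.01, 1e-4, 0.1              # step size and its bounds
alpha, gamma = 0.97, 1e-3                         # VLR tuning constants

x = rng.standard_normal(n_samples)                # white input signal
mse = []
for n in range(n_taps, n_samples):
    u = x[n - n_taps:n][::-1]                     # current input tap vector
    d = plant @ u + 0.01 * rng.standard_normal()  # noisy plant output
    y = w @ u                                     # ADALINE output
    e = d - y                                     # estimation error
    w += 2 * mu * e * u                           # LMS weight update
    mu = np.clip(alpha * mu + gamma * e**2, mu_min, mu_max)  # adapt step size
    mse.append(e**2)

print("final weights:", np.round(w, 3))           # should approach the plant
print("final MSE (mean of last 100):", np.mean(mse[-100:]))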
Similar Articles
The Backpropagation Algorithm Functions for the Multilayer Perceptron
Attempts to solve linearly inseparable problems have led to variations in the number of neuron layers and in the activation functions used. The backpropagation algorithm is the best-known and most widely used supervised learning algorithm. Also called the generalized delta algorithm, because it extends the training method of the ADALINE network, it is based on minimizing the difference between the ...
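As a concrete illustration of the generalized delta rule described above, the following sketch trains a one-hidden-layer perceptron on XOR, the canonical linearly inseparable problem. This is not code from the cited paper; the layer sizes, learning rate, and epoch count are arbitrary assumptions.

```python
import numpy as np

# One-hidden-layer perceptron trained with backpropagation (generalized
# delta rule) on XOR. All sizes and constants are illustrative assumptions.

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets
ones = np.ones((4, 1))

W1 = rng.normal(size=(3, 4))                       # (2 inputs + bias) -> 4 hidden
W2 = rng.normal(size=(5, 1))                       # (4 hidden + bias) -> 1 output
eta = 0.5                                          # learning rate

for _ in range(20000):
    Xb = np.hstack([X, ones])                      # append bias input
    H = sigmoid(Xb @ W1)                           # hidden activations
    Hb = np.hstack([H, ones])                      # append bias unit
    Y = sigmoid(Hb @ W2)                           # network output
    # Generalized delta rule: error times activation derivative,
    # propagated backwards layer by layer.
    d2 = (T - Y) * Y * (1 - Y)
    d1 = (d2 @ W2[:4].T) * H * (1 - H)
    W2 += eta * Hb.T @ d2
    W1 += eta * Xb.T @ d1

print(np.round(Y, 2))                              # approaches [0, 1, 1, 0]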
An Evaluation of an Adaptive Generalized Likelihood Ratio Charts for Monitoring the Process Mean
When the objective is quick detection of both small and large shifts in the mean of a normally distributed process, generalized likelihood ratio (GLR) control charts perform better than other control charts. Only fixed parameters are used in the charts presented by Reynolds and Lou. According to the studies, variable parameters detect process shifts faster than fixed parameters ...
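The sketch below shows the core computation behind such a chart under simple assumptions (known in-control mean mu0 and standard deviation sigma, a window-limited search over candidate change points). The threshold and window size are illustrative choices, not values from the paper.

```python
import numpy as np

# Window-limited GLR chart for a shift in the mean of N(mu0, sigma^2)
# observations. The statistic at time n maximizes the log likelihood ratio
# over candidate change points; h and window are illustrative assumptions.

def glr_mean_chart(x, mu0=0.0, sigma=1.0, window=200, h=5.0):
    """Return the first time index at which the GLR statistic exceeds h."""
    s = np.cumsum(x - mu0)                        # cumulative deviations
    for n in range(1, len(x) + 1):
        lo = max(0, n - window)                   # window-limited search
        k = np.arange(lo, n)                      # candidate change points
        seg_sum = s[n - 1] - np.where(k > 0, s[k - 1], 0.0)
        glr = seg_sum**2 / (2.0 * sigma**2 * (n - k))
        if glr.max() > h:
            return n                              # out-of-control signal
    return None

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1, 300),      # in-control data
                    rng.normal(0.5, 1, 200)])     # small mean shift at t=300
print("GLR chart signals at observation:", glr_mean_chart(x))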
GTE TR88-509.4 NADALINE: A Normalized Adaptive Linear Element that Learns Efficiently
This paper introduces a variant of the ADALINE in which the input signals are normalized to have zero mean and unit variance, and in which the bias or "threshold weight" is learned slightly differently. These changes result in a linear learning element that learns much more efficiently and rapidly, and that is much less dependent on the choice of the step-size parameter. Using simulation experiments ...
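The report's exact bias-learning rule is not given in this snippet, so the following sketch only captures the normalization idea: running estimates of each input's mean and variance are maintained on-line, the LMS update acts on the normalized signals, and the bias is given its own step size as a placeholder assumption.

```python
import numpy as np

# Rough sketch in the spirit of the NADALINE idea: inputs are normalized
# on-line to zero mean and unit variance before the LMS update. The bias
# rule here (a separate step size) is an assumption for illustration.

class NADALINESketch:
    def __init__(self, n_inputs, mu=0.05, mu_bias=0.01, tau=0.99):
        self.w = np.zeros(n_inputs)      # weights on normalized inputs
        self.b = 0.0                     # bias ("threshold weight")
        self.mu, self.mu_bias = mu, mu_bias
        self.tau = tau                   # forgetting factor for running stats
        self.mean = np.zeros(n_inputs)
        self.var = np.ones(n_inputs)

    def step(self, x, d):
        # Update running estimates of each input's mean and variance.
        self.mean = self.tau * self.mean + (1 - self.tau) * x
        self.var = self.tau * self.var + (1 - self.tau) * (x - self.mean) ** 2
        z = (x - self.mean) / np.sqrt(self.var + 1e-8)   # normalized input
        e = d - (self.w @ z + self.b)                    # prediction error
        self.w += self.mu * e * z                        # LMS on z, not x
        self.b += self.mu_bias * e                       # separate bias rule
        return e

node = NADALINESketch(n_inputs=3)
err = node.step(np.array([2.0, -1.0, 0.5]), d=1.0)
print("one-step error:", err)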
Dynamical and stationary properties of on-line learning from finite training sets.
The dynamical and stationary properties of on-line learning from finite training sets are analyzed using the cavity method. For large input dimensions, we derive equations for the macroscopic parameters, namely the student-teacher correlation, the student-student autocorrelation, and the learning force fluctuation. This enables us to provide analytical solutions to Adaline learning as a benchmark ...
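The cavity analysis itself is analytical, but the macroscopic quantities it describes are easy to measure in simulation. The sketch below runs on-line Adaline (LMS) learning on a finite training set and reports the student-teacher overlap; the dimension, training-set size, and learning rate are arbitrary assumptions.

```python
import numpy as np

# On-line Adaline learning from a finite training set, tracking the
# student-teacher overlap R that the macroscopic analysis describes.
# N, P, eta, and the step count are illustrative assumptions.

rng = np.random.default_rng(3)
N, P, eta, steps = 100, 200, 0.05, 20000

B = rng.standard_normal(N); B /= np.linalg.norm(B)   # teacher weights (unit norm)
X = rng.standard_normal((P, N)) / np.sqrt(N)         # finite training set
y = X @ B                                            # teacher outputs
w = np.zeros(N)                                      # student weights

for _ in range(steps):
    i = rng.integers(P)                              # random training example
    e = y[i] - X[i] @ w                              # Adaline (LMS) error
    w += eta * e * X[i]                              # on-line update

R = (w @ B) / (np.linalg.norm(w) + 1e-12)            # student-teacher overlap
print(f"overlap R = {R:.3f}, training MSE = {np.mean((y - X @ w)**2):.2e}")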
Improved Adaline Networks for Robust Pattern Classification
The Adaline network [1] is a classic neural architecture whose learning rule is the famous least mean squares (LMS) algorithm (a.k.a. the delta rule or Widrow-Hoff rule). It has been demonstrated that the LMS algorithm is optimal in the H∞ sense, since it tolerates small (in energy) disturbances such as measurement noise, parameter drifting and modelling errors [2,3]. Such optimality of the LMS algorithm ...
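A small empirical illustration of this robustness, not taken from the cited papers: plain LMS is run against targets corrupted by a measurement-noise disturbance, and the energy of the a-priori estimation error is compared with the energy of the disturbance. The noise level and step size are assumptions.

```python
import numpy as np

# Empirical look at LMS robustness to disturbed targets. The H-infinity
# argument bounds the ratio of filtered error energy to disturbance energy;
# here we only observe the behavior. Constants are illustrative assumptions.

rng = np.random.default_rng(4)
w_true = np.array([1.0, -2.0, 0.5])    # true linear map (assumed)
w = np.zeros(3)
mu = 0.02

err_energy = dist_energy = 0.0
for _ in range(5000):
    x = rng.standard_normal(3)
    v = 0.1 * rng.standard_normal()    # measurement-noise disturbance
    d = w_true @ x + v                 # disturbed target
    e_a = (w_true - w) @ x             # a-priori (noise-free) error
    w += mu * (d - w @ x) * x          # LMS / delta-rule update
    err_energy += e_a**2
    dist_energy += v**2

print("weights:", np.round(w, 3))
print("a-priori error energy / disturbance energy:",
      round(err_energy / dist_energy, 2))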